
[WIP][SPARK-56661] Introducing logical and physical planning nodes for language-agnostic Spark UDFs #55768

Draft

sven-weber-db wants to merge 2 commits into apache:master from sven-weber-db:sven-weber_data/spark-56661-catalyst-and-udf

Conversation

@sven-weber-db
Contributor

What changes were proposed in this pull request?

This PR introduces new logical and physical Catalyst nodes for language-agnostic User Defined Functions (UDFs), as part of SPIP SPARK-55278.

As a first step towards the goal of language-agnostic UDFs, we want to target mapPartition UDFs like pyspark.sql.DataFrame.mapInArrow or pyspark.RDD.mapPartitions. The overarching goal is to deprecate the current, language-specific Catalyst nodes (like mapInArrow). For now, however, the new nodes will exist in addition to the old ones until the new framework has reached maturity.

In summary, this PR introduces:

  • A new Catalyst Expression, ExternalUDFExpression, which captures language-agnostic UDF properties (payload, name, etc.)
  • A new Catalyst logical node, ExternalUDF, which serves as a base class for all language-agnostic UDF nodes
  • A new Catalyst logical node, MapPartitionExternalUDF, which is the new, language-agnostic map partition node
  • Catalyst physical nodes for both logical nodes
  • A new manager class, WorkerDispatcherManager, which manages UDF Dispatchers based on the target UDFWorkerSpecification

None of the changes introduced above are currently consumed in Spark.
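To make the relationship between the pieces listed above concrete, here is a rough Scala sketch. The names mirror the PR's description, but the fields, types, and hierarchy shown are illustrative assumptions only, not the actual classes introduced by this PR:

```scala
// Hypothetical sketch: fields and hierarchy are assumptions for
// illustration, not the real Catalyst API added by this PR.

// Captures language-agnostic UDF properties (payload, name, etc.).
case class ExternalUDFExpression(name: String, payload: Array[Byte])

// Base class for all language-agnostic UDF logical nodes.
abstract class ExternalUDF {
  def expression: ExternalUDFExpression
}

// The new, language-agnostic map-partition logical node.
case class MapPartitionExternalUDF(
    expression: ExternalUDFExpression) extends ExternalUDF
```

In the real PR, each logical node would additionally carry a matching physical (exec) node, and planning would translate one into the other.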

Why are the changes needed?

This is the first step toward language-agnostic UDF execution for Spark. Existing physical and logical planning nodes need to be replaced eventually to achieve this goal as they make language-specific assumptions.

Does this PR introduce any user-facing change?

No

How was this patch tested?

New unit tests were added.

Was this patch authored or co-authored using generative AI tooling?

Partially. However, the code was manually reviewed and adjusted.

Comment thread on sql/core/src/main/scala/org/apache/spark/sql/classic/Dataset.scala (Outdated)
sven-weber-db force-pushed the sven-weber_data/spark-56661-catalyst-and-udf branch from 5a35dee to df7bde7 on May 11, 2026 at 14:28
* Creates a [[WorkerSession]] via [[SparkEnv#getExternalUDFDispatcher]]
* and registers cancellation on task failure. The provided function
* receives the session and must return the result iterator. Moreover,
* the function MUST close the session once all input data has been sent.
Contributor

"all input data have been sent"
what does this mean , do you try to say all udf results have been consumed?

val session = dispatcher.createSession(securityScope)

// Make sure to cancel the session, if the task fails
taskContext.addTaskFailureListener { (_, _) =>
Contributor

@haiyangsun-db May 12, 2026

We may need to add another completion listener as well to call session.close().

The reason is that Spark doesn't have to consume the whole result iterator, e.g. in the case of 'limit'. So if we rely on the iterator's last element being consumed, we may miss the close.
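The pattern suggested here, closing on task completion rather than relying on full iterator consumption, could look roughly like the sketch below. The TaskContextStub and WorkerSession types are simplified stand-ins, not Spark's actual TaskContext API or the PR's real session type:

```scala
// Simplified stand-ins for illustration; real code would use
// org.apache.spark.TaskContext and the PR's WorkerSession type.
trait WorkerSession {
  def close(): Unit
  def cancel(): Unit
}

class TaskContextStub {
  private var onComplete: List[() => Unit] = Nil
  def addTaskCompletionListener(f: () => Unit): Unit = onComplete ::= f
  def addTaskFailureListener(f: Throwable => Unit): Unit = ()
  // Invoked by the scheduler when the task finishes, whether or not
  // the result iterator was fully consumed.
  def markTaskCompleted(): Unit = onComplete.foreach(_.apply())
}

object SessionLifecycle {
  // Register close() on completion (not just cancel() on failure), so a
  // partially consumed iterator (e.g. under LIMIT) still releases the
  // worker session.
  def withSession[T](ctx: TaskContextStub, session: WorkerSession)(
      f: WorkerSession => Iterator[T]): Iterator[T] = {
    ctx.addTaskCompletionListener(() => session.close())
    ctx.addTaskFailureListener(_ => session.cancel())
    f(session)
  }
}
```

With this shape, close() would need to be idempotent, since a fully consumed iterator may already have closed the session before the completion listener fires.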

Contributor

This seems unused. Either:

  1. do not introduce this class in this PR, or
  2. use it in MapPartitionsExternalUDFExec, but pass an f that throws an unimplemented error.

DirectUnixSocketWorkerDispatcher, DirectWorkerProcess,
DirectWorkerSession}

/**
Contributor

@haiyangsun-db May 12, 2026

Any chance we can reuse the testing dispatcher defined in https://github.com/apache/spark/blob/master/udf/worker/core/src/test/scala/org/apache/spark/udf/worker/core/DirectWorkerDispatcherSuite.scala (can be updated if necessary)? As that is supposed to be agnostic to a worker spec.

That way we can reduce some duplication, and in case of API changes we only need to update one place.

Contributor Author

Yes, good idea! I moved the TestDispatcher into a test-only shared file that can be reused here. There are still some parts of the implementation that remain in this suite, as this test relies on an actual socket connection, and the test in /udf/ only checks for file existence. It would be weird to move the logic from this test into /udf/ as well, as this logic is not consumed in the /udf package.

